# Efficient distilled model

Kotoba Whisper V2.0
License: Apache-2.0
Kotoba-Whisper is a distilled Japanese automatic speech recognition model developed by Asahi Ushio in collaboration with Kotoba Technologies. Distilled from Whisper large-v3, it delivers roughly 6.3x faster inference. A usage sketch follows below.
Tags: Speech Recognition, Transformers, Japanese
kotoba-tech · 8,108 · 60
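A minimal sketch of how such a distilled Whisper model is typically used for transcription via the Hugging Face `transformers` pipeline. The model id `kotoba-tech/kotoba-whisper-v2.0` and the audio file name are assumptions, not confirmed by this page.

```python
# Minimal sketch: Japanese ASR with a distilled Whisper model via transformers.
# Model id and audio path are assumptions for illustration.
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="kotoba-tech/kotoba-whisper-v2.0",  # assumed Hugging Face repo id
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    device="cuda:0" if torch.cuda.is_available() else "cpu",
)

# Transcribe a local Japanese audio file (placeholder path),
# chunking long audio into 15-second windows.
result = asr("sample_ja.wav", chunk_length_s=15)
print(result["text"])
```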
Sts Distilcamembert Base
License: MIT
A French sentence embedding model based on DistilCamemBERT that encodes sentences or paragraphs into 768-dimensional vectors for tasks such as sentence similarity computation. A usage sketch follows below.
Tags: Text Embedding, Transformers, French
h4c5 · 48 · 1
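A minimal sketch of computing French sentence similarity with the `sentence-transformers` library, assuming the model is hosted on Hugging Face under `h4c5/sts-distilcamembert-base` (an assumption; adjust the id if it differs).

```python
# Minimal sketch: French sentence similarity with a DistilCamemBERT-based
# embedding model. The model id is an assumption for illustration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("h4c5/sts-distilcamembert-base")  # assumed repo id

sentences = [
    "Le chat dort sur le canapé.",
    "Un félin fait la sieste sur le sofa.",
]

# Encode both sentences into 768-dimensional vectors.
embeddings = model.encode(sentences)

# Cosine similarity between the two embeddings.
score = util.cos_sim(embeddings[0], embeddings[1])
print(f"Cosine similarity: {score.item():.3f}")
```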